Results 1 - 17 of 17
1.
19th IEEE India Council International Conference, INDICON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2271937

ABSTRACT

A large number of people search the web for their health-related problems. However, the number of sites where qualified and verified people answer these queries is quite low compared to the number of questions being posted, and the rate of queries on such sites has further increased due to the COVID-19 pandemic. The main reason people find it difficult to find solutions to their queries is the ineffective identification of semantically similar questions in the medical domain. In most cases an answer to a query already exists; the only caveat is that the question may be phrased differently from the one asked by the particular user. In this research, we propose a Siamese-based BERT model to detect similar questions using a fine-tuning approach. The network is fine-tuned first with medical question-answer pairs and then with question-question pairs to obtain a better question similarity prediction. © 2022 IEEE.
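
A minimal sketch of the Siamese idea behind this entry, assuming an off-the-shelf sentence encoder rather than the authors' fine-tuned BERT: both questions pass through the same weight-shared encoder and archived questions are ranked by cosine similarity to the new query. The checkpoint name and example questions are illustrative assumptions only.

```python
# Hedged sketch of Siamese question-similarity retrieval; not the paper's model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

asked = "Can I take paracetamol after my COVID-19 vaccine?"
archived = [
    "Is it safe to use acetaminophen following coronavirus vaccination?",
    "What are the side effects of the flu shot?",
]

# Encode both sides with the same (weight-shared) encoder, then rank archived
# questions by cosine similarity to the newly asked query.
query_emb = model.encode(asked, convert_to_tensor=True)
cand_embs = model.encode(archived, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_embs)[0]

for question, score in zip(archived, scores.tolist()):
    print(f"{score:.3f}  {question}")
```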

2.
IEEE Sensors Journal ; 23(2):969-976, 2023.
Article in English | Scopus | ID: covidwho-2244030

ABSTRACT

The recent SARS-CoV-2 virus, the cause of COVID-19, badly affected the world's healthcare system due to limited medical resources for a large number of infected human beings. Quarantine helps in breaking the spread of the virus for such communicable diseases. This work proposes a nonwearable/contactless system for human location and activity recognition using ubiquitous wireless signals. The proposed method utilizes the channel state information (CSI) of the wireless signals, recorded through a low-cost device, for estimating the location and activity of the person under quarantine. We propose to utilize a Siamese architecture with combined one-dimensional convolutional neural networks (1-D-CNNs) and bi-directional long short-term memory (Bi-LSTM) networks. The proposed method provides high accuracy for the joint task and is validated on two real-world testbeds: first, using the designed low-cost CSI recording hardware, and second, on a public dataset for joint activity and location estimation. The human activity recognition (HAR) results outperform state-of-the-art machine and deep learning methods, and localization results are comparable with the existing methods. © 2001-2012 IEEE.
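
A rough PyTorch sketch of a weight-shared 1-D-CNN + Bi-LSTM encoder for CSI windows, in the spirit of the architecture named above. The layer sizes, number of subcarriers (30), window length, and class counts are illustrative assumptions, not the paper's exact configuration, and the joint two-head readout stands in for the full Siamese comparison scheme.

```python
# Sketch only: shared CSI encoder (1-D CNN + Bi-LSTM) with activity and location heads.
import torch
import torch.nn as nn

class CSIEncoder(nn.Module):
    def __init__(self, n_subcarriers=30, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_subcarriers, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                  # x: (batch, time, subcarriers)
        h = self.cnn(x.transpose(1, 2))    # -> (batch, 64, time)
        out, _ = self.lstm(h.transpose(1, 2))
        return out[:, -1]                  # last time step: (batch, 2*hidden)

class JointHAR(nn.Module):
    """One shared (Siamese-style, weight-shared) encoder feeding two task heads."""
    def __init__(self, n_activities=6, n_locations=4):
        super().__init__()
        self.encoder = CSIEncoder()
        self.activity_head = nn.Linear(128, n_activities)
        self.location_head = nn.Linear(128, n_locations)

    def forward(self, x):
        z = self.encoder(x)
        return self.activity_head(z), self.location_head(z)

model = JointHAR()
dummy = torch.randn(8, 200, 30)            # 8 windows, 200 time steps, 30 subcarriers
act_logits, loc_logits = model(dummy)
print(act_logits.shape, loc_logits.shape)
```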

3.
Inform Med Unlocked ; 37: 101156, 2023.
Article in English | MEDLINE | ID: covidwho-2234015

ABSTRACT

Patients with COVID-19 infection may have pneumonia-like symptoms as well as respiratory problems that may harm the lungs. Coronavirus illness may be accurately identified and predicted from medical images using a variety of machine learning methods. Most of the published machine learning methods may need extensive hyperparameter adjustment and are unsuitable for small datasets. Few-shot learning algorithms aim to reduce the requirement for large datasets by leveraging the data in a comparatively small one. This inspired us to develop a few-shot learning model for early detection of COVID-19 to reduce the after-effects of this dangerous disease. The proposed architecture combines few-shot learning with an ensemble of pre-trained convolutional neural networks to extract feature vectors from CT scan images for similarity learning. The proposed Triplet Siamese Network, used as the few-shot learning model, classified CT scan images into Normal, COVID-19, and Community-Acquired Pneumonia classes. The suggested model achieved an overall accuracy of 98.719%, a specificity of 99.36%, a sensitivity of 98.72%, and a ROC score of 99.9% with only 200 CT scans per category for training data.
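
A hedged sketch of triplet-based similarity learning on top of a pre-trained CNN feature extractor, as the Triplet Siamese setup above suggests. The backbone choice (ResNet-18), embedding size, margin, and random tensors are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: embed anchor/positive/negative through one shared backbone, apply triplet loss.
import torch
import torch.nn as nn
from torchvision import models

class EmbeddingNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pre-trained weights could be loaded instead
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# anchor/positive come from the same class (e.g. COVID-19), negative from another class
anchor   = torch.randn(4, 3, 224, 224)
positive = torch.randn(4, 3, 224, 224)
negative = torch.randn(4, 3, 224, 224)

loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
print(float(loss))
```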

4.
Appl Soft Comput ; 131: 109683, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2068704

ABSTRACT

Worldwide, COVID-19 is a highly infectious and rapidly spreading disease in almost all age groups. Computed Tomography (CT) scans of the lungs are found to be accurate for the timely diagnosis of COVID-19 infection. In the proposed work, a deep learning-based P-shot N-ways Siamese network, along with prototypical nearest neighbor classifiers, is implemented for the classification of COVID-19 infection from lung CT scan slices. For this, a Siamese network with identical sub-networks (weight sharing) is used for image classification with a limited dataset for each class. The feature vectors are obtained from the pre-trained, weight-sharing sub-networks. The performance of the proposed methodology is evaluated on the benchmark MosMed dataset, which has a zero category (healthy control) and numerous COVID-19 infection categories. The proposed methodology is evaluated on (a) chest CT scans of 1110 patients provided by medical hospitals in Moscow, Russia, and (b) a case study of low-dose CT scans of 42 patients provided by Avtaran healthcare in India. The deep learning-based Siamese network (15-shot, 5-ways) obtained an accuracy of 98.07%, a sensitivity of 95.66%, a specificity of 98.83%, and an F1-score of 95.10%. The proposed work outperforms existing approaches for COVID-19 infection severity classification when only limited scans are available for the numerous infection categories.
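
An illustrative sketch of prototypical nearest-neighbor classification in a P-shot, N-ways episode (here 5-shot, 3-ways). The random embeddings stand in for the feature vectors the paper obtains from its weight-shared sub-networks; dimensions and the toy episode are assumptions.

```python
# Sketch: class prototypes = mean of support embeddings; query label = nearest prototype.
import torch

def prototypical_predict(support, support_labels, query, n_ways):
    """support: (N*P, d) embeddings; query: (Q, d) embeddings."""
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_ways)]
    )                                          # (N, d) class prototypes
    dists = torch.cdist(query, prototypes)     # Euclidean distance to each prototype
    return dists.argmin(dim=1)                 # nearest prototype = predicted class

# toy episode: 3 classes, 5 shots each, 64-d embeddings
support = torch.randn(15, 64)
labels = torch.arange(3).repeat_interleave(5)
query = torch.randn(4, 64)
print(prototypical_predict(support, labels, query, n_ways=3))
```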

5.
IEEE Sensors Journal ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2018957

ABSTRACT

The recent SARS-CoV-2 virus, the cause of COVID-19, badly affected the world's healthcare system due to limited medical resources for a large number of infected human beings. Quarantine helps in breaking the spread of the virus for such communicable diseases. This work proposes a non-wearable/contactless system for human location and activity recognition using ubiquitous wireless signals. The proposed method utilizes the Channel State Information (CSI) of the wireless signals, recorded through a low-cost device, for estimating the location and activity of the person under quarantine. We propose to utilize a Siamese architecture with combined one-dimensional Convolutional Neural Networks (1D-CNN) and bi-directional long short-term memory (Bi-LSTM) networks. The proposed method provides high accuracy for the joint task and is validated on two real-world testbeds: first, using the designed low-cost CSI recording hardware, and second, on a public dataset for joint activity and location estimation. The human activity recognition (HAR) results outperform state-of-the-art machine and deep learning methods, and localization results are comparable with the existing methods. © IEEE.

6.
3rd International Conference for Emerging Technology, INCET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018889

ABSTRACT

Face recognition is a deeply studied and researched domain, with many solutions and model architectures that tackle the majority of face recognition concerns. In this work we address a more specific version of face recognition, aimed at the long-distance and natural limitations of CCTV identification in real-world scenarios, by improving on existing methods. The deep learning solution this paper proposes can recognise a person even if they wear a face mask due to the COVID-19 pandemic. One-shot learning is incorporated so that the model can be trained with just one image per individual to be recognized. To achieve these requirements, the designed model is a modified Siamese network architecture trained with a triplet loss function. © 2022 IEEE.

7.
2nd International Conference on Advance Computing and Innovative Technologies in Engineering, ICACITE 2022 ; : 2597-2600, 2022.
Article in English | Scopus | ID: covidwho-1992624

ABSTRACT

Human faces, being highly dynamic, are extensively studied in the fields of pattern recognition, computer vision and artificial intelligence. However, identification of faces using only a part of the face remains an understudied domain. Detection of faces using just uncovered eye images can be a boon for surveillance and security, especially in times of COVID-19 when most people are advised to cover their faces in public spaces. In this paper we present a system which identifies a person's face using the visible eye region, namely the eyes and the forehead portions of the person. The model is trained over a basic convolution network and the classification is done using Siamese networks. The classification accuracy is measured using a dissimilarity score, which calculates the Euclidean distance between the converted feature vectors of the eye regions. Regions which are similar have a negligible dissimilarity score. © 2022 IEEE.
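
A small sketch of the dissimilarity score mentioned above: the Euclidean distance between two feature vectors, with a threshold deciding whether the two eye regions belong to the same person. The embedding vectors and the threshold value are placeholders, not the paper's trained features.

```python
# Sketch: Euclidean dissimilarity between eye-region feature vectors.
import numpy as np

def dissimilarity(vec_a, vec_b):
    return float(np.linalg.norm(vec_a - vec_b))

def same_person(vec_a, vec_b, threshold=0.5):
    return dissimilarity(vec_a, vec_b) < threshold   # near-zero score => same identity

emb_probe = np.random.rand(128)                      # stand-in for an eye-region feature vector
emb_gallery = emb_probe + 0.01 * np.random.rand(128)
print(dissimilarity(emb_probe, emb_gallery), same_person(emb_probe, emb_gallery))
```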

8.
Neural Comput Appl ; 34(14): 12143-12157, 2022.
Article in English | MEDLINE | ID: covidwho-1971717

ABSTRACT

Extreme learning machine (ELM) is a powerful classification method and is very competitive among existing classification methods. It is fast to train. Nevertheless, it cannot perform face verification tasks properly, because face verification requires comparing the facial images of two individuals simultaneously and deciding whether the two faces identify the same person. The ELM structure was not designed to be fed two input data streams simultaneously. Thus, in 2-input scenarios, ELM methods are typically applied using concatenated inputs. However, this setup consumes twice the computational resources, and it is not optimized for recognition tasks where learning a separable distance metric is critical. For these reasons, we propose and develop a Siamese extreme learning machine (SELM). SELM was designed to be fed with two data streams in parallel simultaneously. It utilizes a dual-stream Siamese condition in the extra Siamese layer to transform the data before passing it to the hidden layer. Moreover, we propose a Gender-Ethnicity-dependent triplet feature exclusively trained on various specific demographic groups. This feature enables learning and extracting useful facial features of each group. Experiments were conducted to evaluate and compare the performances of SELM, ELM, and a deep convolutional neural network (DCNN). The experimental results showed that the proposed feature could perform correct classification at 97.87% accuracy and 99.45% area under the curve (AUC). They also showed that using SELM in conjunction with the proposed feature provided 98.31% accuracy and 99.72% AUC. SELM thus outperformed the well-known DCNN and ELM methods with robust performance.
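
For context on the baseline setup this abstract contrasts against, below is a tiny sketch of a standard ELM fed with the two face feature vectors concatenated into one input. It is not SELM itself; the embedding size, hidden-layer width, and random data are assumptions for illustration.

```python
# Sketch of a classic ELM on concatenated pair inputs (the baseline, not SELM).
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=200):
    """Random hidden layer + closed-form (least-squares) output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer activations
    beta = np.linalg.pinv(H) @ y            # output weights via pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# two 128-d face embeddings per pair, concatenated -> 256-d input
pairs = rng.standard_normal((500, 256))
same = (rng.random((500, 1)) > 0.5).astype(float)   # 1 = same person, 0 = different

W, b, beta = train_elm(pairs, same)
print(predict_elm(pairs[:5], W, b, beta).ravel())
```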

9.
21st International Conference on Image Analysis and Processing, ICIAP 2022 ; 13231 LNCS:368-378, 2022.
Article in English | Scopus | ID: covidwho-1877766

ABSTRACT

Periocular recognition has attracted attention in recent times. The advent of the COVID-19 pandemic, and the consequent obligation to wear facial masks, made face recognition problematic due to the significant occlusion of the lower part of the face. In this work, a dual-input neural network architecture is proposed. The structure is a Siamese-like model, with two identical parallel streams (called base models) that process the two inputs separately. The input is represented by RGB images of the right eye and the left eye belonging to the same subject. The outputs of the two base models are merged through a fusion layer. The aim is to investigate how deep feature aggregation affects periocular recognition. The experimentation is performed on the Masked Face Recognition Database (M2FRED), which includes videos of 46 participants with and without masks. Three different fusion layers are applied to understand which type of merging technique is most suitable for data aggregation. Experimental results show promising performance for almost all experimental configurations, with a worst-case accuracy of 90% and a best-case accuracy of 97%. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
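
A minimal sketch of the dual-input idea above: two parallel streams encode the right and left eye and a fusion layer merges their features before classification. The layer sizes, weight sharing between streams, and the fusion operators shown (concatenation or addition) are assumptions, not the paper's exact three fusion layers.

```python
# Sketch: two eye streams -> fusion layer -> subject classifier.
import torch
import torch.nn as nn

class EyeStream(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def forward(self, x):
        return self.net(x)

class PeriocularNet(nn.Module):
    def __init__(self, dim=128, n_subjects=46, fusion="concat"):
        super().__init__()
        self.stream = EyeStream(dim)      # assumed shared weights for both eyes
        self.fusion = fusion
        in_dim = 2 * dim if fusion == "concat" else dim
        self.head = nn.Linear(in_dim, n_subjects)

    def forward(self, right_eye, left_eye):
        r, l = self.stream(right_eye), self.stream(left_eye)
        fused = torch.cat([r, l], dim=1) if self.fusion == "concat" else r + l
        return self.head(fused)

model = PeriocularNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(logits.shape)
```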

10.
4th International Conference on Computing and Communications Technologies, ICCCT 2021 ; : 500-507, 2021.
Article in English | Scopus | ID: covidwho-1769595

ABSTRACT

The COVID-19 pandemic has had an impact on many aspects of our daily lives, such as restricting contact through touch, wearing masks, practicing social distancing, and staying indoors, which has changed our behavior and prioritized the importance of safety and hygiene. We travel to different places such as schools, colleges, restaurants, offices, and hospitals. How do we adapt to these changes and refrain from getting the virus? Luckily, we have the technology to aid us. We are all used to biometric systems for marking our presence/attendance in places like colleges, offices, and schools with fingerprint sensors, which use our fingerprints to mark our presence; however, COVID-19 has restricted the use of touch, causing problems in marking attendance. One way to resolve the problem is to use artificial intelligence, with a recognizer that identifies people by their face and iris features. We implement face recognition and iris recognition using two models which run concurrently: one recognizes the face by extracting facial features and passing the 128-d points to a neural network (MobileNet and ResNet architectures), which gives the identity of the person whose image was matched with the trained database, and the other recognizes people by extracting iris features. For extracting iris features we use the Gabor filter, and the extracted features are then matched against the database for recognition using three distance-based matching algorithms, city block distance, Euclidean distance, and cosine distance, which give accuracies of 88.19%, 84.95%, and 85.42%, respectively. The face recognizer model yields an accuracy of 98%, while the iris recognizer yields an accuracy of 88%. When these models run concurrently, the system yields an accuracy of 92.4%. © 2021 IEEE.
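
A hedged sketch of Gabor-based feature extraction followed by the three distance matchers named above (city block, Euclidean, cosine). The Gabor kernel parameters, the summary statistics used as features, and the random stand-in images are illustrative assumptions, not the paper's iris pipeline.

```python
# Sketch: Gabor filter bank features + three distance-based matchers.
import cv2
import numpy as np
from scipy.spatial.distance import cityblock, euclidean, cosine

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])   # simple per-filter statistics
    return np.array(feats)

probe = np.random.randint(0, 255, (64, 64), dtype=np.uint8).astype(np.float32)
enrolled = np.random.randint(0, 255, (64, 64), dtype=np.uint8).astype(np.float32)

f1, f2 = gabor_features(probe), gabor_features(enrolled)
print("city block:", cityblock(f1, f2))
print("euclidean :", euclidean(f1, f2))
print("cosine    :", cosine(f1, f2))
```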

11.
Diagnostics (Basel) ; 12(3)2022 Mar 18.
Article in English | MEDLINE | ID: covidwho-1760434

ABSTRACT

Chest X-ray (CXR) is becoming a useful method in the evaluation of coronavirus disease 2019 (COVID-19). With the global spread of COVID-19, utilizing a computer-aided diagnosis approach for COVID-19 classification based on CXR images could significantly reduce the clinician burden. There is no doubt that low resolution, noise and irrelevant annotations in chest X-ray images are a major constraint on the performance of AI-based COVID-19 diagnosis. While a few studies have made huge progress, they underestimate these bottlenecks. In this study, we propose a super-resolution-based Siamese wavelet multi-resolution convolutional neural network called COVID-SRWCNN for COVID-19 classification using chest X-ray images. Concretely, we first reconstruct high-resolution (HR) counterparts from low-resolution (LR) CXR images, in order to enhance the quality of the dataset and improve the performance of our model, by proposing a novel enhanced fast super-resolution convolutional neural network (EFSRCNN) to capture texture details in each given chest X-ray image. Exploiting a mutual learning approach, the HR images are passed to the proposed Siamese wavelet multi-resolution convolutional neural network to learn the high-level features for COVID-19 classification. We validate the proposed COVID-SRWCNN model on public-source datasets, achieving an accuracy of 98.98%. Our screening technique achieves 98.96% AUC, 99.78% sensitivity, 98.53% precision, and 98.86% specificity. Given that COVID-19 chest X-ray datasets are low in quality, the experimental results show that our proposed algorithm obtains up-to-date performance that is useful for COVID-19 screening.

12.
1st International Conference on Advanced Network Technologies and Intelligent Computing, ANTIC 2021 ; 1534 CCIS:517-529, 2022.
Article in English | Scopus | ID: covidwho-1750540

ABSTRACT

In modern days, face recognition is a critical aspect of security and surveillance. Face recognition techniques are widely used on mobile devices and in public surveillance. Occlusion is a challenge when designing face recognition applications. During the COVID-19 pandemic, we are advised to wear a face mask in public places; this helps prevent droplets from a potentially COVID-19-positive person's nose or mouth from entering our body. However, it makes it difficult for security personnel to identify a human face from the partially exposed portion. Most of the existing models are built on the entire human face and could either fail or perform poorly in the scenario mentioned above. In this paper, a solution is proposed that leverages a Siamese neural network for human face recognition from the partial human face. The prototype has been developed on celebrity faces and validated against the state-of-the-art VGGFace2 (ResNet50) model. Our proposed model performs well and provides very competitive results of 93% best-of-five accuracy and 84.80 ± 4.71% mean accuracy for partial face images. © 2022, Springer Nature Switzerland AG.

13.
2021 Ethics and Explainability for Responsible Data Science Conference, EE-RDS 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1741178

ABSTRACT

In the current COVID-19 pandemic situation, there is an urgent need to properly diagnose whether people are infected by COVID-19 or not. Fast and accurate methods are needed to improve the efficiency of the healthcare system so that infected people can be given priority treatment. Deep learning methods are widely used in many medical fields, especially in medical diagnosis, because deep learning techniques can find patterns that can be attributed to various diseases. The main challenge in applying deep learning techniques in the medical field is the lack of quality labelled data. Approaches such as one-shot learning are becoming increasingly popular in the medical community because they perform better with limited data. In this paper, we compare several deep learning models for COVID-19 image classification, including state-of-the-art architectures such as ResNet and EfficientNet. We also present a technique that combines the triplet loss and cross-entropy loss functions. This technique enables the model to learn weights in such a way that it attempts to cluster the different classes during classification, which increases the model's interpretability and groups similar data together. The dataset we used was made available as part of the Chest XR COVID-19 detection challenge. The EfficientNet-B7 model obtained the best result on the test set with 95.67% accuracy. Using a Siamese network, we were able to embed the images into a lower dimension in such a way that they can be clustered into different groups; classification based on this embedded space obtained an accuracy of 93.76% on the test set. © 2021 IEEE.
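
A sketch of combining triplet loss with cross-entropy, as described above, so the network both separates classes in embedding space and classifies them. The toy encoder, input shapes, and the 0.5 weighting factor are assumptions for illustration only.

```python
# Sketch: joint cross-entropy + triplet objective on a shared embedding network.
import torch
import torch.nn as nn

class EmbedAndClassify(nn.Module):
    def __init__(self, dim=128, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, dim), nn.ReLU())
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

model = EmbedAndClassify()
ce, triplet = nn.CrossEntropyLoss(), nn.TripletMarginLoss(margin=1.0)

anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
labels = torch.randint(0, 3, (8,))                      # class labels of the anchors

za, logits = model(anchor)
zp, _ = model(positive)
zn, _ = model(negative)

loss = ce(logits, labels) + 0.5 * triplet(za, zp, zn)   # 0.5 weighting is an assumption
loss.backward()
print(float(loss))
```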

14.
Diagnostics (Basel) ; 12(3)2022 Mar 15.
Article in English | MEDLINE | ID: covidwho-1742367

ABSTRACT

Coronavirus disease has spread rapidly around the globe since early January 2020. With millions of deaths, it is essential that an automated system be utilized to aid clinical diagnosis and reduce the time needed for image analysis. This article presents a generative adversarial network (GAN)-based deep learning application for precisely regaining high-resolution (HR) CXR images from low-resolution (LR) CXR counterparts for COVID-19 identification. Using the building blocks of a GAN, we introduce a modified enhanced super-resolution generative adversarial network plus (MESRGAN+) to implement a connected nonlinear mapping from noise-contaminated low-resolution input images to deblurred and denoised HR images. In contrast to the latest trend of increasing network complexity and computational cost, we incorporate an enhanced VGG19 fine-tuned twin network with a wavelet pooling strategy in order to extract distinct features for COVID-19 identification. We demonstrate our proposed model on a publicly available dataset of 11,920 chest X-ray samples, with 2980 cases of COVID-19 CXR alongside healthy, viral and bacterial cases. Our proposed model performs efficiently on both the binary and the four-class classification. The proposed method achieves an accuracy of 98.8%, precision of 98.6%, sensitivity of 97.5%, specificity of 98.9%, an F1 score of 97.8% and a ROC AUC of 98.8% for the multi-class task, while, for the binary class, the model achieves an accuracy of 99.7%, precision of 98.9%, sensitivity of 98.7%, specificity of 99.3%, an F1 score of 98.2% and a ROC AUC of 99.7%. According to the experimental results, our method obtains state-of-the-art (SOTA) performance, which is helpful for COVID-19 screening. This new conceptual framework is proposed to play an influential role in addressing the issues facing COVID-19 examination and other diseases.

15.
Healthcare (Basel) ; 10(2)2022 Feb 21.
Article in English | MEDLINE | ID: covidwho-1702956

ABSTRACT

Computed tomography has become a vital screening method for the detection of coronavirus disease 2019 (COVID-19). With the high mortality rate and the overload on domain experts, radiologists, and clinicians, there is a need for a computerized diagnostic technique. To this end, we improve the performance of COVID-19 identification by tackling the low quality and resolution of computed tomography images. We report a technique named the modified enhanced super-resolution generative adversarial network for obtaining better high-resolution computed tomography images. Furthermore, in contrast to the trend of increasing network depth and complexity to boost imaging performance, we incorporate a Siamese capsule network that extracts distinct features for COVID-19 identification. The qualitative and quantitative results establish that the proposed model is effective, accurate, and robust for COVID-19 screening. We demonstrate the proposed model for COVID-19 identification on the publicly available COVID-CT dataset, which contains 349 COVID-19 and 463 non-COVID-19 computed tomography images. The proposed method achieves an accuracy of 97.92%, sensitivity of 98.85%, specificity of 97.21%, AUC of 98.03%, precision of 98.44%, and F1 score of 97.52%. According to the experimental results, our approach obtains state-of-the-art performance, which is helpful for COVID-19 screening. This new conceptual framework is proposed to play an influential role in addressing the issues facing COVID-19 and related ailments when only a few datasets are available.

16.
Neural Netw ; 142: 316-328, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1392462

ABSTRACT

Recently, tracking models based on bounding box regression (such as region proposal networks), built on the Siamese network, have attracted much attention. Despite their promising performance, these trackers are less effective in perceiving the target information in the following two aspects. First, existing regression models cannot take a global view of a large-scale target since the effective receptive field of a neuron is too small to cover the target with a large scale. Second, the neurons with a fixed receptive field (RF) size in these models cannot adapt to the scale and aspect ratio changes of the target. In this paper, we propose an adaptive ensemble perception tracking framework to address these issues. Specifically, we first construct a per-pixel prediction model, which predicts the target state at each pixel of the correlated feature. On top of the per-pixel prediction model, we then develop a confidence-guided ensemble prediction mechanism. The ensemble mechanism adaptively fuses the predictions of multiple pixels with the guidance of confidence maps, which enlarges the perception range and enhances the adaptive perception ability at the object-level. In addition, we introduce a receptive field adaption model to enhance the adaptive perception ability at the neuron-level, which adjusts the RF by adaptively integrating the features with different RFs. Extensive experimental results on the VOT2018, VOT2016, UAV123, LaSOT, and TC128 datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of accuracy and speed.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Perception; Attention
17.
Pattern Recognit ; 113: 107700, 2021 May.
Article in English | MEDLINE | ID: covidwho-867034

ABSTRACT

Various AI functionalities, such as pattern recognition and prediction, can effectively be used to diagnose (recognize) and predict coronavirus disease 2019 (COVID-19) infections and propose a timely response (remedial action) to minimize the spread and impact of the virus. Motivated by this, an AI system based on deep meta learning has been proposed in this research to accelerate the analysis of chest X-ray (CXR) images in automatic detection of COVID-19 cases. We present a synergistic approach to integrate contrastive learning with a fine-tuned pre-trained ConvNet encoder to capture unbiased feature representations and leverage a Siamese network for the final classification of COVID-19 cases. We validate the effectiveness of our proposed model using two publicly available datasets comprising images from normal, COVID-19 and other pneumonia-infected categories. Our model achieves 95.6% accuracy and an AUC of 0.97 in diagnosing COVID-19 from CXR images even with a limited number of training samples.
